
    Balancing generalization and lexical conservatism: an artificial language study with child learners

    Successful language acquisition involves generalization, but learners must balance this against the acquisition of lexical constraints. Such learning has been considered problematic for theories of acquisition: if learners generalize abstract patterns to new words, how do they learn lexically-based exceptions? One approach claims that learners use distributional statistics to make inferences about when generalization is appropriate, a hypothesis which has recently received support from Artificial Language Learning experiments with adult learners (Wonnacott, Newport, & Tanenhaus, 2008). Since adult and child language learning may be different (Hudson Kam & Newport, 2005), it is essential to extend these results to child learners. In the current work, four groups of children (6 years) were each exposed to one of four semi-artificial languages. The results demonstrate that children are sensitive to linguistic distributions at and above the level of particular lexical items, and that these statistics influence the balance between generalization and lexical conservatism. The data are in line with an approach which models generalization as rational inference, and in particular with the predictions of the domain-general hierarchical Bayesian model developed by Kemp, Perfors, and Tenenbaum (2006). This suggests that such models have relevance for theories of language acquisition.
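
    The "rational inference" account referenced above can be illustrated with a small sketch. The following is a minimal, hypothetical Beta-Binomial hierarchy, not the actual model of Kemp, Perfors, and Tenenbaum (2006): it shows how a learner who infers whether familiar items behave consistently or variably will either generalize to, or remain conservative about, a novel item seen only once. The function names, counts, and modelling choices are illustrative assumptions.

```python
# Minimal illustrative sketch (not the published model): each lexical item i
# uses construction A with probability theta_i ~ Beta(alpha*mu, alpha*(1-mu)).
# Inferring the concentration alpha from familiar items tells the learner
# whether items alternate freely (high alpha) or are lexically fixed (low
# alpha), which in turn controls generalization to a novel item.
import numpy as np
from scipy.special import betaln

def log_marginal(counts, alpha, mu=0.5):
    """Log probability of per-item (A, B) counts given hyperparameters."""
    a, b = alpha * mu, alpha * (1 - mu)
    return sum(betaln(a + nA, b + nB) - betaln(a, b) for nA, nB in counts)

def predictive_for_novel_item(counts, novel=(1, 0), alphas=np.logspace(-1, 2, 200)):
    """P(a novel item takes construction A again | it was observed once with A)."""
    logp = np.array([log_marginal(counts, al) for al in alphas])
    post = np.exp(logp - logp.max())
    post /= post.sum()                                   # posterior over alpha (grid approximation)
    nA, nB = novel
    pred = (alphas * 0.5 + nA) / (alphas + nA + nB)      # Beta-Binomial posterior predictive
    return float(post @ pred)

# Input where every familiar item alternates between the two constructions:
variable_language = [(3, 3), (3, 3), (3, 3), (3, 3)]
# Input where each familiar item occurs with only one construction:
lexicalist_language = [(6, 0), (6, 0), (0, 6), (0, 6)]

print(predictive_for_novel_item(variable_language))    # closer to 0.5: generalize the alternation
print(predictive_for_novel_item(lexicalist_language))  # closer to 1.0: be lexically conservative
```

    Under this sketch, the same inference machinery yields generalization when the input is variable across items and lexical conservatism when it is lexically conditioned, which is the pattern the abstract describes.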

    Acquiring variation in an artificial language: children and adults are sensitive to socially conditioned linguistic variation

    Languages exhibit sociolinguistic variation, such that adult native speakers condition the usage of linguistic variants on social context, gender, and ethnicity, among other cues. While the existence of this kind of socially conditioned variation is well-established, less is known about how it is acquired. Studies of naturalistic language use by children provide various examples where children's production of sociolinguistic variants appears to be conditioned on similar factors to adults' production, but it is difficult to determine whether this reflects knowledge of sociolinguistic conditioning or systematic differences in the input to children from different social groups. Furthermore, artificial language learning experiments have shown that children have a tendency to eliminate variation, a process which could potentially work against their acquisition of sociolinguistic variation. The current study used a semi-artificial language learning paradigm to investigate learning of the sociolinguistic cue of speaker identity in 6-year-olds and adults. Participants were trained and tested on an artificial language where nouns were obligatorily followed by one of two meaningless particles and were produced by one of two speakers (one male, one female). Particle usage was conditioned deterministically on speaker identity (Experiment 1), probabilistically (Experiment 2), or not at all (Experiment 3). Participants were given tests of production and comprehension. In Experiments 1 and 2, both children and adults successfully acquired the speaker identity cue, although the effect was stronger for adults and in Experiment 1. In addition, in all three experiments, there was evidence of regularization in participants' productions, although the type of regularization differed with age: children showed regularization by boosting the frequency of one particle at the expense of the other, while adults regularized by conditioning particle usage on lexical items. Overall, results demonstrate that children and adults are sensitive to speaker identity cues, an ability which is fundamental to tracking sociolinguistic variation, and that children's well-established tendency to regularize does not prevent them from learning sociolinguistically conditioned variation.

    Disfluency in dialogue: an intentional signal from the speaker?

    Disfluency is a characteristic feature of spontaneous human speech, commonly seen as a consequence of problems with production. However, the question remains open as to why speakers are disfluent: Is it a mechanical by-product of planning difficulty, or do speakers use disfluency in dialogue to manage listeners' expectations? To address this question, we present two experiments investigating the production of disfluency in monologue and dialogue situations. Dialogue affected the linguistic choices made by participants, who aligned on referring expressions by choosing less frequent names for ambiguous images where those names had previously been mentioned. However, participants were no more disfluent in dialogue than in monologue situations, and the distribution of types of disfluency used remained constant. Our evidence rules out at least a straightforward interpretation of the view that disfluencies are an intentional signal in dialogue.

    Structural priming in artificial languages and the regularisation of unpredictable variation

    We present a novel experimental technique using artificial language learning to investigate the relationship between structural priming during communicative interaction and linguistic regularity. We use unpredictable variation as a test-case, because it is a well-established paradigm to study learners’ biases during acquisition, transmission and interaction. We trained participants on artificial languages exhibiting unpredictable variation in word order, and subsequently had them communicate using these artificial languages. We found evidence for structural priming in two different grammatical constructions and across human-human and human-computer interaction. Priming occurred regardless of behavioral convergence: communication led to shared word order use only in human-human interaction, but priming was observed in all conditions. Furthermore, interaction resulted in the reduction of unpredictable variation in all conditions, suggesting a role for communicative interaction in eliminating unpredictable variation. Regularisation was strongest in human-human interaction and in a condition where participants believed they were interacting with a human but were in fact interacting with a computer. We suggest that participants recognize the counter-functional nature of unpredictable variation and thus act to eliminate this variability during communication. Furthermore, reciprocal priming occurring in human-human interaction drove some pairs of participants to converge on maximally regular, highly predictable linguistic systems. Our method offers potential benefits to both the artificial language learning and the structural priming fields, and provides a useful tool to investigate communicative processes that lead to language change and ultimately language design.

    Acquiring and processing verb argument structure: distributional learning in a miniature language

    Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.

    The cognitive roots of regularization in language

    Regularization occurs when the output a learner produces is less variable than the linguistic data they observed. In an artificial language learning experiment, we show that there exist at least two independent sources of regularization bias in cognition: a domain-general source based on cognitive load and a domain-specific source triggered by linguistic stimuli. Both of these factors modulate how frequency information is encoded and produced, but only the production-side modulations result in regularization (i.e. cause learners to eliminate variation from the observed input). We formalize the definition of regularization as the reduction of entropy and find that entropy measures are better at identifying regularization behavior than frequency-based analyses. Using our experimental data and a model of cultural transmission, we generate predictions for the amount of regularity that would develop in each experimental condition if the artificial language were transmitted over several generations of learners. Here we find that the effect of cognitive constraints can become more complex when put into the context of cultural evolution: although learning biases certainly carry information about the course of language evolution, we should not expect a one-to-one correspondence between the micro-level processes that regularize linguistic datasets and the macro-level evolution of linguistic regularity.
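
    The entropy-based formalization mentioned above can be made concrete with a short sketch. This is a minimal illustration, assuming Shannon entropy over the frequencies of competing variants; the variant labels and counts below are hypothetical, not taken from the study's materials.

```python
# Minimal sketch of regularization as entropy reduction: the learner's output
# counts as regularized when the Shannon entropy of its variant frequencies is
# lower than the entropy of the variant frequencies in the training input.
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a frequency distribution over variants."""
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values() if n > 0)

def regularization(input_counts, output_counts):
    """Positive values mean the learner's output is less variable than the input."""
    return entropy(input_counts) - entropy(output_counts)

# Hypothetical training input: two particles used 60% / 40% of the time.
training = Counter({"particle_1": 60, "particle_2": 40})
# A learner who boosts the majority variant has regularized the language.
learner = Counter({"particle_1": 85, "particle_2": 15})

print(round(entropy(training), 3))                   # ~0.971 bits
print(round(entropy(learner), 3))                    # ~0.610 bits
print(round(regularization(training, learner), 3))   # ~0.361 bits of entropy reduction
```

    An entropy measure of this kind captures reduced variability regardless of which variant the learner favors, which is why it can identify regularization that frequency-based analyses of a single variant would miss.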

    Harmonic biases in child learners: In support of language universals

    A fundamental question for cognitive science concerns the ways in which languages are shaped by the biases of language learners. Recent research using laboratory language learning paradigms, primarily with adults, has shown that structures or rules that are common in the languages of the world are learned or processed more easily than patterns that are rare or unattested. Here we target child learners, investigating a set of biases for word order learning in the noun phrase studied by Culbertson, Smolensky & Legendre (2012) in college-age adults. We provide the first evidence that child learners exhibit a preference for typologically common harmonic word order patterns—those which preserve the order of the head with respect to its complements—validating the psychological reality of a principle formalized in many different linguistic theories. We also discuss important differences between child and adult learners in terms of both the strength and content of the biases at play during language learning. In particular, the bias favoring harmonic patterns is markedly stronger in children than adults, and children (unlike adults) acquire adjective ordering more readily than numeral ordering. The results point to the importance of investigating learning biases across development in order to understand how these biases may shape the history and structure of natural languages.

    Simplicity and specificity in language: Domain-general biases have domain-specific effects

    The extent to which the linguistic system—its architecture, the representations it operates on, the constraints it is subject to—is specific to language has broad implications for cognitive science and its relation to evolutionary biology. Importantly, a given property of the linguistic system can be specific to the domain of language in several ways. For example, if the property evolved by natural selection under the pressure of the linguistic function it serves, then the property is domain-specific in the sense that its design is tailored for language. Equally though, if that property evolved to serve a different function or if that property is domain-general, it may nevertheless interact with the linguistic system in a way that is unique. This gives a second sense in which a property can be thought of as specific to language. An evolutionary approach to the language faculty might at first blush appear to favor domain-specificity in the first sense, with individual properties of the language faculty being specifically linguistic adaptations. However, we argue that interactions between learning, culture and biological evolution mean that any domain-specific adaptations that evolve will take the form of weak biases rather than hard constraints. Turning to the latter sense of domain-specificity, we highlight a very general bias, simplicity, which operates widely in cognition and yet interacts with linguistic representations in a domain-specific way.